
    Implications of stochastic ion channel gating and dendritic spine plasticity for neural information processing and storage

    On short timescales, the brain represents, transmits, and processes information through the electrical activity of its neurons. On long timescales, it stores information in the strengths of the synaptic connections between its neurons. This thesis examines the surprising implications of two separate, well-documented microscopic processes for neural information processing and storage: the stochastic gating of ion channels and the plasticity of dendritic spines. Electrical activity in neurons is mediated by many small membrane proteins called ion channels. Although single ion channels are known to open and close stochastically, the macroscopic behaviour of populations of ion channels is often approximated as deterministic, on the assumption that the intrinsic noise introduced by stochastic channel gating is too weak to matter. In this study we take advantage of newly developed, efficient computer simulation methods to examine cases where this assumption breaks down. We find that ion channel noise can mediate spontaneous action potential firing in small nerve fibres, and we explore its possible implications for neuropathic pain disorders of peripheral nerves. We then characterise the magnitude of ion channel noise for single neurons in the central nervous system, and demonstrate through simulation that channel noise is sufficient to corrupt synaptic integration, spike timing and spike reliability in dendritic neurons. The second topic concerns neural information storage. Learning and memory in the brain have long been believed to be mediated by changes in the strengths of synaptic connections between neurons, a phenomenon termed synaptic plasticity. Most excitatory synapses in the brain are hosted on small membrane structures called dendritic spines, and plasticity of these synapses depends on calcium concentration changes within the spine. In the last decade it has become clear that spines are highly dynamic structures that appear and disappear, and can shrink and enlarge on rapid timescales, and that this structural plasticity is intimately linked to synaptic plasticity: small spines host weak synapses, and large spines host strong synapses. Because spine size is one factor that determines synaptic calcium concentration, spine structural plasticity is likely to influence the rules of synaptic plasticity. We study the consequences of this observation theoretically, and find that different spine-size to synaptic-strength relationships can lead to qualitative differences in long-term synaptic strength dynamics and information storage. This theory unifies much disparate existing data, including the unimodal distribution of synaptic strengths, the saturation of synaptic plasticity, and the stability of strong synapses.
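    The contrast between single-channel stochasticity and the deterministic population approximation can be illustrated with a toy simulation. The sketch below is not taken from the thesis: the two-state channel scheme, rate constants and channel counts are arbitrary assumptions chosen for illustration. It tracks the open fraction of a channel population with binomial open/close transitions and compares its mean and fluctuations against the deterministic steady state; the fluctuations shrink roughly as 1/sqrt(N), which is why the deterministic approximation fails mainly for small channel populations.

        import numpy as np

        rng = np.random.default_rng(0)

        # Illustrative two-state channel: closed <-> open, with fixed rates (per ms).
        alpha, beta = 0.5, 0.3        # opening and closing rates
        dt, t_max = 0.01, 50.0        # time step and duration (ms)
        steps = int(t_max / dt)

        def simulate(n_channels):
            """Binomial (Markov-chain) simulation of the number of open channels."""
            n_open = 0
            fraction_open = np.empty(steps)
            for t in range(steps):
                # Each closed channel opens with prob alpha*dt; each open one closes with prob beta*dt.
                opened = rng.binomial(n_channels - n_open, alpha * dt)
                closed = rng.binomial(n_open, beta * dt)
                n_open += opened - closed
                fraction_open[t] = n_open / n_channels
            return fraction_open

        p_inf = alpha / (alpha + beta)        # deterministic steady-state open fraction
        for n in (100, 10_000):
            trace = simulate(n)[steps // 2:]  # discard the initial transient
            print(f"N={n:>6}: mean open fraction {trace.mean():.3f} "
                  f"(deterministic {p_inf:.3f}), fluctuation s.d. {trace.std():.3f}")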

    Adaptive Estimators Show Information Compression in Deep Neural Networks

    To improve how neural networks function, it is crucial to understand their learning process. The information bottleneck theory of deep learning proposes that neural networks achieve good generalization by compressing their representations to disregard information that is not relevant to the task. However, empirical evidence for this theory is conflicting: compression was only observed in networks that used saturating activation functions, while networks with non-saturating activation functions achieved comparable levels of task performance but did not show compression. In this paper we develop more robust mutual information estimation techniques that adapt to the hidden activity of neural networks and produce more sensitive measurements of activations from all functions, especially unbounded ones. Using these adaptive estimators, we explore compression in networks with a range of different activation functions. With the two improved estimation methods, we first show that saturation of the activation function is not required for compression, and that the amount of compression varies between activation functions. We also find a large amount of variation in compression between different network initializations. Second, we see that L2 regularization leads to significantly increased compression while preventing overfitting. Finally, we show that only compression of the last layer is positively correlated with generalization.
    Comment: Accepted as a poster presentation at ICLR 2019 and reviewed on OpenReview (https://openreview.net/forum?id=SkeZisA5t7).
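    The measurement at the heart of the abstract is an estimate of the mutual information between a layer's activations and a task variable. The sketch below illustrates one adaptive approach in that spirit, quantile-based binning followed by a plug-in discrete estimate; it is an illustration only, not the authors' estimator, and the function names, bin count and toy data are invented for the example.

        import numpy as np

        def adaptive_bin(activations, n_bins=30):
            """Discretise activations with quantile-based (adaptive) bin edges,
            so unbounded activations (e.g. ReLU outputs) are covered evenly."""
            edges = np.quantile(activations, np.linspace(0.0, 1.0, n_bins + 1)[1:-1])
            return np.digitize(activations, np.unique(edges))

        def discrete_mutual_information(x, y):
            """Plug-in mutual information estimate (in bits) for two discrete vectors."""
            joint = np.zeros((x.max() + 1, y.max() + 1))
            np.add.at(joint, (x, y), 1)
            joint /= joint.sum()
            px = joint.sum(axis=1, keepdims=True)
            py = joint.sum(axis=0, keepdims=True)
            nz = joint > 0
            return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

        # Toy example: information a noisy, unbounded "hidden activation" carries about class labels.
        rng = np.random.default_rng(1)
        labels = rng.integers(0, 10, size=5_000)
        hidden = labels + rng.normal(0.0, 2.0, size=5_000)
        print(f"I(hidden; label) = {discrete_mutual_information(adaptive_bin(hidden), labels):.2f} bits")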

    Neural circuit function redundancy in brain disorders

    Redundancy is a ubiquitous property of the nervous system: vastly different configurations of cellular and synaptic components can enable the same neural circuit functions. Until recently, however, very little brain disorder research has considered the implications of this property when designing experiments or interpreting data. Here, we first summarise the evidence for redundancy in healthy brains, defining redundancy and three related sub-concepts: sloppiness, dependencies and multiple solutions. We then lay out key implications for brain disorder research, covering recent examples of redundancy effects in experimental studies of psychiatric disorders. Finally, we give predictions for future experiments based on these concepts.
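    As a concrete, deliberately oversimplified picture of redundancy and multiple solutions, the toy sketch below samples random excitatory and inhibitory parameter combinations for a threshold-linear unit and keeps those that produce the same target output rate; the surviving parameter sets span wide ranges. The model, parameter names and ranges are invented for illustration and are not taken from the review.

        import numpy as np

        rng = np.random.default_rng(4)

        def circuit_output(g_exc, g_inh, drive=2.0):
            """Toy rate model: the output depends only on net input, so many
            (g_exc, g_inh) pairs are functionally equivalent."""
            return np.maximum(0.0, drive * g_exc - drive * g_inh)

        target_rate, tolerance = 5.0, 0.1

        # Randomly sample conductance-like parameters and keep the ones hitting the target.
        g_exc = rng.uniform(0.0, 20.0, size=200_000)
        g_inh = rng.uniform(0.0, 20.0, size=200_000)
        ok = np.abs(circuit_output(g_exc, g_inh) - target_rate) < tolerance

        print(f"{ok.sum()} of {ok.size} sampled parameter sets match the target rate")
        print(f"g_exc of solutions spans [{g_exc[ok].min():.1f}, {g_exc[ok].max():.1f}]")
        print(f"g_inh of solutions spans [{g_inh[ok].min():.1f}, {g_inh[ok].max():.1f}]")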

    Signatures of Bayesian inference emerge from energy efficient synapses

    Biological synaptic transmission is unreliable, and this unreliability likely degrades neural circuit performance. While there are biophysical mechanisms that can increase reliability, for instance by increasing vesicle release probability, these mechanisms cost energy. We examined four such mechanisms along with the scaling of their associated energetic costs. We then embedded these energetic costs of reliability in artificial neural networks (ANNs) with trainable stochastic synapses, and trained the networks on standard image classification tasks. The resulting networks revealed a tradeoff between circuit performance and the energetic cost of synaptic reliability. The optimised networks also exhibited two testable predictions consistent with pre-existing experimental data: synapses with lower variability tended to have 1) higher input firing rates and 2) lower learning rates. Surprisingly, the same predictions arise when synapse statistics are inferred through Bayesian inference. Indeed, we found a formal, theoretical link between the performance-reliability cost tradeoff and Bayesian inference. This connection suggests two incompatible possibilities: evolution may have chanced upon a scheme for implementing Bayesian inference by optimising energy efficiency, or energy-efficient synapses may display signatures of Bayesian inference without actually using Bayes to reason about uncertainty.
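    The performance-versus-energy tradeoff described in the abstract can be illustrated with a toy binomial-release synapse. In the sketch below, the energy model (a cost that grows linearly with release probability) and all parameter values are assumptions made for illustration rather than the cost functions examined in the paper; it simply shows that raising release probability lowers the trial-to-trial variability of the postsynaptic response while raising the assumed energy cost.

        import numpy as np

        rng = np.random.default_rng(2)

        def synapse_samples(n_sites, p_release, quantal_size, n_trials=100_000):
            """Binomial model of vesicle release: each site releases one quantum with probability p."""
            return rng.binomial(n_sites, p_release, size=n_trials) * quantal_size

        def energy_cost(p_release, cost_per_unit_p=1.0):
            """Assumed energy model: cost grows linearly with release probability."""
            return cost_per_unit_p * p_release

        n_sites, quantal_size = 10, 1.0
        for p in (0.1, 0.3, 0.6, 0.9):
            psp = synapse_samples(n_sites, p, quantal_size)
            cv = psp.std() / psp.mean()   # coefficient of variation = synaptic unreliability
            print(f"p={p:.1f}: mean response {psp.mean():5.2f}, CV {cv:.2f}, energy {energy_cost(p):.2f}")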

    Spontaneous action potentials and neural coding in unmyelinated axons

    The voltage-gated Na and K channels in neurons are responsible for action potential generation. Because ion channels open and close stochastically, spontaneous (ectopic) action potentials can arise even in the absence of stimulation. While spontaneous action potentials have been studied in detail in single-compartment models, studies of spatially extended processes have been limited. The simulations and analysis presented here show that the spontaneous firing rate in unmyelinated axons depends nonmonotonically on axon length, that the spontaneous activity has sub-Poisson statistics, and that spontaneous spikes can hamper neural coding by reducing the probability of transmitting the first spike in a train.
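    Here, "sub-Poisson statistics" means the spontaneous spike counts are more regular than those of a Poisson process, so the Fano factor (count variance divided by count mean) falls below one. The sketch below is a generic illustration of that test on surrogate spike trains, not code or parameters from the paper.

        import numpy as np

        rng = np.random.default_rng(3)

        def fano_factor(spike_times, t_max, window):
            """Fano factor of spike counts in non-overlapping windows (1.0 for a Poisson process)."""
            counts, _ = np.histogram(spike_times, bins=np.arange(0.0, t_max + window, window))
            return counts.var() / counts.mean()

        def spike_train(isi_samples, t_max):
            """Turn inter-spike intervals into spike times, truncated at t_max."""
            times = np.cumsum(isi_samples)
            return times[times < t_max]

        t_max, rate = 1000.0, 5.0   # seconds, spikes per second
        n = int(2 * rate * t_max)   # draw more intervals than needed

        # Poisson surrogate (exponential intervals) versus a more regular, sub-Poisson
        # surrogate (gamma intervals with the same mean).
        poisson = spike_train(rng.exponential(1.0 / rate, size=n), t_max)
        regular = spike_train(rng.gamma(4.0, 1.0 / (4.0 * rate), size=n), t_max)

        for name, spikes in (("Poisson", poisson), ("sub-Poisson", regular)):
            print(f"{name:>12}: Fano factor = {fano_factor(spikes, t_max, window=1.0):.2f}")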